x86: suppress SMEP and SMAP while running 32-bit PV guest code
author    Jan Beulich <jbeulich@suse.com>
          Fri, 13 May 2016 17:12:22 +0000 (18:12 +0100)
committer Andrew Cooper <andrew.cooper3@citrix.com>
          Fri, 13 May 2016 17:15:45 +0000 (18:15 +0100)
commit    ea3e8edfdbabfb17f0d39ed128716ec464f348b8
tree      30dd1e2caf31d593eb0a2b648384d66be6f2c519
parent    cd2cd109e7db3a7e689c20b8991d41115ed5bea6
x86: suppress SMEP and SMAP while running 32-bit PV guest code

Since such guests' kernel code runs in ring 1, their memory accesses
are, at the paging layer, supervisor-mode ones, and hence subject to
SMAP/SMEP checks. Such guests cannot, however, be expected to be aware
of these two features (and so far we also don't expose the respective
feature flags), and hence may suffer page faults they cannot deal with.

While the placement of the re-enabling slightly weakens the intended
protection, it was selected such that 64-bit paths would remain
unaffected where possible. At the expense of a further performance hit,
the re-enabling could be placed right next to the CLACs.

Note that this introduces a number of extra TLB flushes - CR4.SMEP
transitioning from 0 to 1 always causes a flush, and transitioning
from 1 to 0 may do so as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-acked-by: Wei Liu <wei.liu2@citrix.com>
xen/arch/x86/setup.c
xen/arch/x86/x86_64/asm-offsets.c
xen/arch/x86/x86_64/compat/entry.S
xen/arch/x86/x86_64/entry.S
xen/include/asm-x86/asm_defns.h
xen/include/asm-x86/processor.h